TNG AI UnitTestGen
Automated Unit Test Generation for Java, Kotlin, C#, and TypeScript
Our TNG AI UnitTestGen tool automatically generates unit tests for projects written in Java, Kotlin, C#, and TypeScript. It leverages Large Language Models (LLMs) to create tests based on the logic of your source code, helping to ensure that changes to the codebase do not unintentionally alter its functionality.
How it works
Our approach uses feedback loops from test compilation and execution to guarantee that the generated tests are functional. The process is autonomous: it requires no manual intervention from a software developer to guide the test generation. All code execution is confined to an isolated Docker environment, minimizing the risks associated with running LLM-generated code. Our tool automatically retrieves context for the code under test, ensuring that the LLM receives the required information about the repository.
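As a rough illustration of this loop, consider the Java sketch below. All names in it, such as Llm, Sandbox, and FeedbackLoop, are hypothetical and chosen for readability; they do not reflect the tool's actual internals.

```java
import java.util.List;

// Illustrative sketch of the compile-and-execute feedback loop. Every type and
// method name here is hypothetical; this is not TNG AI UnitTestGen's actual API.
interface Llm {
    String generateTestSuite(String sourceFile, String repositoryContext);
    String fixCompilationErrors(String tests, List<String> compilerErrors);
    String fixFailingTests(String tests, List<String> testFailures);
}

interface Sandbox { // backed by an isolated Docker container
    CompileResult compile(String tests);
    TestResult run(String tests);
}

record CompileResult(boolean success, List<String> errors) {}
record TestResult(boolean allPassing, List<String> failures) {}

class FeedbackLoop {
    static String generate(Llm llm, Sandbox sandbox, String sourceFile,
                           String repositoryContext, int maxAttempts) {
        String tests = llm.generateTestSuite(sourceFile, repositoryContext);
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            CompileResult compiled = sandbox.compile(tests);
            if (!compiled.success()) {
                // Fix-and-compile cycle: feed the compiler errors back to the LLM.
                tests = llm.fixCompilationErrors(tests, compiled.errors());
                continue;
            }
            TestResult result = sandbox.run(tests);
            if (!result.allPassing()) {
                // Debugging cycle: feed the failing-test output back to the LLM.
                tests = llm.fixFailingTests(tests, result.failures());
                continue;
            }
            return tests; // the suite compiles and all tests pass
        }
        throw new IllegalStateException("No passing test suite within the attempt budget");
    }
}
```

The two retry branches correspond to the fix-and-compile and debugging cycles described in the CLI workflow below.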
Our tool works with Java (supporting Maven and Gradle), Kotlin (supporting Gradle), C# (supporting .NET versions 6 and higher), and TypeScript (supporting the package managers npm and pnpm, and the testing frameworks Vitest and Jest). Extensions for JetBrains IDEs and VS Code simplify setup and streamline the workflow directly in your preferred editor.
Key benefits
TNG AI UnitTestGen is superior to other LLM-based test-generation approaches in the following respects:
Independence from developer oversight: Enables efficient and autonomous unit test generation without manual user intervention.
Quality assurance: We evaluate the quality of generated tests not just by code coverage, but also through mutation testing where applicable; see the example after this list.
Security and isolation: By executing generated code in a Docker environment, we ensure a safe and efficient test-generation process in which only the final version of the tests requires review.
Efficiency: Internal benchmarking has shown that our tool achieves results similar to those of other approaches in significantly less time (30-50% of the runtime) and with fewer input tokens (less than 50%).
Detection of insufficient coverage: An optional initial analysis step reveals which test suites should be extended and for which parts of the source code testing can be skipped.
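To see why mutation testing is a stronger quality signal than code coverage alone, consider the following minimal, self-made Java/JUnit 5 example (it is not taken from the tool). A mutation tool such as PIT might replace >= with > in isAdult; a test using an age far above the boundary covers the line completely yet lets that mutant survive, while the boundary test kills it.

```java
import static org.junit.jupiter.api.Assertions.*;

import org.junit.jupiter.api.Test;

class AgeCheck {
    static boolean isAdult(int age) {
        return age >= 18; // a mutation tool may flip this `>=` into `>`
    }
}

class AgeCheckTest {
    @Test
    void acceptsAgeWellAboveTheBoundary() {
        // Fully covers the line but does NOT kill the `>` mutant: 30 > 18 still holds.
        assertTrue(AgeCheck.isAdult(30));
    }

    @Test
    void acceptsExactlyEighteen() {
        // Kills the `>` mutant: 18 > 18 is false, so the mutated code fails this test.
        assertTrue(AgeCheck.isAdult(18));
    }

    @Test
    void rejectsSeventeen() {
        assertFalse(AgeCheck.isAdult(17));
    }
}
```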
TNG AI UnitTestGen is designed to support developers in creating robust and reliable software by automating a crucial but tedious part of the development process, leaving more time to focus on other aspects of project development.
Workflow in the CLI
The screenshot shows a typical CLI workflow for generating a unit test suite with TNG AI UnitTestGen. The process begins by copying the repository to a temporary directory, ensuring that the original codebase remains untouched during generation. It then targets the specified file, checks for existing tests, and, since none are found, creates a new test suite from scratch. The first version of the suite does not compile, so TNG AI UnitTestGen runs multiple fix-and-compile cycles until all compilation errors are resolved. Next, it executes the tests, detects a failing test, and runs an additional debugging cycle until all tests pass. Afterwards, mutation testing is performed: non-contributing tests, i.e. those that do not improve the mutation score, are removed, and mutation testing is rerun. Because the resulting mutation coverage exceeds the configured threshold of 80%, the run is considered successful, and the generated tests are written back into the original codebase.
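The pruning-and-verification phase at the end of such a run might be sketched as follows; again, all names (MutationRunner, MutationReport, MutationPhase) are hypothetical placeholders, not the tool's actual implementation.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the final mutation-testing phase described above.
interface MutationRunner {
    MutationReport run(List<String> tests); // executed inside the isolated Docker sandbox
}

record MutationReport(double score, Set<String> nonContributingTests) {}

class MutationPhase {
    // Drops tests that do not improve the mutation score, re-runs mutation
    // testing, and reports whether the pruned suite clears the threshold.
    static boolean pruneAndVerify(MutationRunner runner, List<String> tests,
                                  double threshold) { // e.g. 0.80
        MutationReport report = runner.run(tests);
        List<String> pruned = new ArrayList<>(tests);
        pruned.removeIf(report.nonContributingTests()::contains);
        return runner.run(pruned).score() >= threshold;
    }
}
```

Re-running mutation testing after pruning confirms that removing the non-contributing tests has not dropped the score below the configured threshold.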

Interested?
To help you get started with TNG AI UnitTestGen, TNG offers guided access to the tool along with comprehensive support for its integration. Our expert team can also provide consulting services to ensure a smooth transition, assist you in choosing the right LLM backend, and give a personalized demonstration of UnitTestGen tailored to your specific use case.
Contact us via info@tngtech.com or use one of the other contact options to learn more about our tool.